Interpretation of Meaningful Expressions by Integrating Gesture and Posture Modalities
Authors
Abstract
Integration of different modalities aims to increase the robustness of a system and to improve its performance in an unconstrained environment. With this motivation, this research focuses on integrating two modalities, hand gesture and posture, in a unified framework whose integration criterion is computed by a Particle Filter system. The proposed framework has two main modules: 1) a gesture and posture recognition system, and 2) Particle-filter-based integration of these systems incorporating CFG rules. In the first module, gesture and posture features are extracted and classified by an HMM and an SVM, respectively. In the second module, integration is carried out by mapping the classification outcomes onto the Particle filter system, which acts as a contribution weight at the decision level. Moreover, to infer and extract the “meaningful expressions” from the input sequence based on their contribution weights, we exploit a regular grammar and develop production rules using a context-free grammar (CFG) for our test scenario (i.e. a restaurant). Experiments conducted on 500 different combinations of restaurant orders achieve an overall inference accuracy of 98.3%, while the classification accuracy of gesture and posture recognition is approximately 98.6%, which demonstrates the significance of the proposed approach.
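The pipeline the abstract describes can be illustrated with a minimal sketch: per-class scores from the two classifiers are combined with contribution weights at the decision level, and the resulting label sequence is checked against CFG-style production rules for a restaurant order. The weight values, label names, and grammar productions below are invented for illustration; the paper's actual rules and weights are not given in the abstract.

```python
# Hypothetical sketch: decision-level fusion with contribution weights,
# then a grammar check of the fused label sequence. All labels, weights,
# and productions here are assumptions, not the paper's actual values.

def fuse(gesture_scores, posture_scores, w_gesture=0.5, w_posture=0.5):
    """Combine per-class scores from the gesture (HMM) and posture (SVM)
    classifiers using contribution weights; return the best fused label."""
    labels = set(gesture_scores) | set(posture_scores)
    fused = {
        lab: w_gesture * gesture_scores.get(lab, 0.0)
             + w_posture * posture_scores.get(lab, 0.0)
        for lab in labels
    }
    return max(fused, key=fused.get)

# Toy CFG for a restaurant order, e.g.
#   ORDER -> GREET ITEM QTY | ITEM QTY
# encoded here as the set of right-hand sides a label sequence must match.
GRAMMAR = {
    ("GREET", "ITEM", "QTY"),
    ("ITEM", "QTY"),
}

def is_meaningful(sequence):
    """Accept a fused label sequence iff it matches a production of ORDER."""
    return tuple(sequence) in GRAMMAR

# Example: fuse one frame's classifier outputs, then validate a sequence.
label = fuse({"ITEM": 0.7, "GREET": 0.3}, {"ITEM": 0.6, "QTY": 0.4})
print(label)                               # prints ITEM
print(is_meaningful(["GREET", "ITEM", "QTY"]))  # prints True
```

In this sketch the "meaningful expression" is simply a label sequence that derives from the start symbol; a real implementation would parse longer sequences incrementally rather than match whole productions.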
Similar Resources
An Integrated HCI Framework for Interpreting Meaningful Expressions
Integration of different modalities aims to increase the robustness of a system and to improve its performance in an unconstrained environment. With this motivation, the proposed research focuses on integrating two modalities, hand gesture and posture, in a unified framework whose integration criterion is computed by a Particle Filter system. The proposed framework has two main modules: 1)...
Design Challenges in Multi Modal Inference Systems for Human Computer Interaction
Human computer interfaces still lack the ability to identify and respond to users’ emotional and mental states, despite evidence that such knowledge may improve interaction (Picard, 1997). Human communication often supplements verbal messages with other channels, such as vocal nuances, facial expressions, posture and gesture, to convey emotions, attitudes, mental states and personality traits. ...
Fingertip Tracking and Hand Gesture Recognition by 3D Vision
This paper introduces an algorithm to track the palm and fingertips based on images with depth data in real time. Some meaningful hand gestures are then recognized from the detected palm and fingertips. The images with depth information are first captured by a Kinect camera. Then foreground and background are separated to pick out the potential hand. The fingertips are then detected by the curvature of ...
Integrating Facial, Gesture, and Posture Emotion Expression for a 3D Virtual Agent
Facial expressions, gestures, and body postures can portray emotions in a non-verbal way. These methods are frequently employed by actors in theatrical plays or in movies, or even by virtual characters such as those found in computer games, animated storybooks, and website e-assistants. Signals for emotion expressions (“cues”), such as a raised fist and narrowing of the eyes, substantially infl...
Human-Robot Interaction by Understanding Upper Body Gestures
In this paper, a human-robot interaction system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures is invo...
Publication date: 2011